
    Radar and satellite observations of precipitation: space time variability, cross-validation, and fusion

    2017 Fall. Includes bibliographical references.
    Rainfall estimation based on satellite measurements has proven very useful for a wide range of applications, and a number of precipitation products at multiple time and space scales have been developed from satellite observations. For example, the National Oceanic and Atmospheric Administration (NOAA) Climate Prediction Center has developed a morphing technique (CMORPH) to produce global precipitation products by combining existing space-based observations and retrievals. The CMORPH products are derived using infrared (IR) brightness temperature information observed by geostationary satellites and passive microwave (PMW)-based precipitation retrievals from low-Earth-orbit satellites. Although space-based precipitation products provide an excellent tool for regional, local, and global hydrologic and climate studies, as well as improved situational awareness for operational forecasts, their accuracy is limited by the restrictions of spatial and temporal sampling and by the applied parametric retrieval algorithms, particularly for light precipitation or extreme events such as heavy rain. In contrast, ground-based radar is an excellent tool for quantitative precipitation estimation (QPE) at finer space-time scales than satellites, especially after the dual-polarization upgrades and further enhancement by urban-scale X-band radar networks. As a result, ground radars are often critical for local-scale rainfall estimation and for enabling forecasters to issue severe weather watches and warnings; they are also used for validation of various space-based measurements and products.
    In this study, a new S-band dual-polarization radar rainfall algorithm (DROPS2.0) is developed that can be applied to the National Weather Service (NWS) operational Weather Surveillance Radar-1988 Doppler (WSR-88DP) network. In addition, a real-time high-resolution QPE system is developed for the Engineering Research Center for Collaborative Adaptive Sensing of the Atmosphere (CASA) Dallas-Fort Worth (DFW) dense radar network, which is deployed for urban hydrometeorological applications via high-resolution observations of the lower atmosphere. The CASA/DFW QPE system combines a standard WSR-88DP (the KFWS radar) with a high-resolution dual-polarization X-band radar network. The specific radar rainfall methodologies at S- and X-band frequencies, as well as the fusion methodology for merging radar observations at different temporal resolutions, are investigated. Comparisons between rainfall products from the DFW radar network and rain gauge measurements, conducted for a large number of precipitation events over several years of operation, demonstrate the excellent performance of this urban QPE system. The real-time DFW QPE products are extensively used for flood warning operations and hydrological modelling, and the high-resolution products also serve as a reliable dataset for validation of Global Precipitation Measurement (GPM) satellite precipitation products.
    This study also introduces a machine learning-based data fusion system, termed deep multi-layer perceptron (DMLP), to improve satellite-based precipitation estimation by incorporating ground radar-derived rainfall products. In particular, the CMORPH technique is applied first to derive combined PMW-based rainfall retrievals and IR data from multiple satellites. The combined PMW and IR data then serve as input to the proposed DMLP model, while high-quality rainfall products from ground radars are used as targets to train it. In this dissertation, the prototype architecture of the DMLP model is detailed and its urban-scale application over the DFW metroplex is presented. The DMLP-based rainfall products are evaluated against currently operational CMORPH products and surface rainfall measurements from gauge networks.
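    The abstract describes the DMLP as a deep multi-layer perceptron that maps combined PMW and IR satellite inputs to radar-derived rainfall targets. The sketch below illustrates what such a fusion model could look like in PyTorch; the layer widths, feature count, and training setup are illustrative assumptions, not the dissertation's actual architecture.

```python
# Hypothetical sketch of a DMLP-style fusion model: PMW rain-rate retrievals
# and IR brightness temperatures per grid cell go in, a radar-calibrated
# rain rate comes out. Layer widths and feature counts are illustrative.
import torch
import torch.nn as nn

class DMLP(nn.Module):
    def __init__(self, n_features: int = 8, hidden: int = 64, depth: int = 4):
        super().__init__()
        layers, width = [], n_features
        for _ in range(depth):
            layers += [nn.Linear(width, hidden), nn.ReLU()]
            width = hidden
        layers.append(nn.Linear(width, 1))  # predicted rain rate (mm/h)
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Training loop: ground-radar QPE serves as the regression target.
model = DMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

satellite_features = torch.randn(256, 8)   # stand-in for PMW + IR predictors
radar_qpe = torch.rand(256, 1) * 20.0      # stand-in for radar rain rates

for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(satellite_features), radar_qpe)
    loss.backward()
    opt.step()
```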

    Moments That Matter: The Role of Emotional Stimuli at Event Boundaries in Memory

    The present study examined the impact of event segmentation and emotional arousal on long-term memory performance. Event segmentation is the cognitive process of automatically dividing experiences into smaller pieces for better consolidation and retrieval, resulting in the formation of event boundaries. Prior research has identified the crucial role of event segmentation in long-term and working memory, but few studies have explored ways to enhance its effects. Emotional arousal refers to the physiological and psychological activation of the body and mind in response to an emotional stimulus, and previous research has indicated that heightened arousal may enhance memory performance. The present study investigates whether this benefit extends to the impact of event segmentation on memory. In this 2 x 2 factorial study, 44 participants watched a narrative TV episode containing emotionally arousing material of varying arousal levels at different locations in the episode. The participants were subsequently tested on their ability to recognize, recall, and accurately reproduce the temporal order of the episode's contents. The results indicated significant main effects of both break location and arousal level on memory, as well as a significant interaction between the two factors. The findings support the notion that event segmentation and emotionally arousing materials can enhance memory performance and suggest that high-arousal materials may amplify the effect of event segmentation on memory.
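    For readers unfamiliar with how main effects and an interaction are tested in a 2 x 2 factorial design like the one above, the snippet below sketches a standard two-way ANOVA in Python with statsmodels. The data are synthetic stand-ins; the factor names mirror the study's break location and arousal level, but nothing here reproduces the study's measurements or analysis choices.

```python
# Illustrative two-way ANOVA for a 2 x 2 design (break location x arousal
# level), mirroring the analysis the abstract reports; the data here are
# synthetic stand-ins, not the study's measurements.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "location": np.repeat(["boundary", "middle"], 22),
    "arousal": np.tile(np.repeat(["high", "low"], 11), 2),
    "recall": rng.normal(0.7, 0.1, 44),
})

model = ols("recall ~ C(location) * C(arousal)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects + interaction term
```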

    MoEController: Instruction-based Arbitrary Image Manipulation with Mixture-of-Expert Controllers

    Diffusion-model-based text-guided image generation has recently made astounding progress, producing fascinating results in open-domain image manipulation tasks. Few models, however, currently have complete zero-shot capabilities for both global and local image editing, owing to the complexity and diversity of image manipulation tasks. In this work, we propose a method with mixture-of-expert (MoE) controllers to align the text-guided capacity of diffusion models with different kinds of human instructions, enabling our model to handle various open-domain image manipulation tasks with natural language instructions. First, we use large language models (ChatGPT) and conditional image synthesis models (ControlNet) to generate a large-scale global image transfer dataset in addition to the instruction-based local image editing dataset. Then, using an MoE technique and task-specific adaptation training on a large-scale dataset, our conditional diffusion model can edit images both globally and locally. Extensive experiments demonstrate that our approach performs surprisingly well on various image manipulation tasks when dealing with open-domain images and arbitrary human instructions. Please refer to our project page: https://oppo-mente-lab.github.io/moe_controller/
    Comment: 5 pages, 6 figures
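    As a rough illustration of the mixture-of-expert controller idea, the sketch below routes an instruction embedding across several expert controller networks with a learned softmax gate and blends their outputs. The dimensions, expert count, and module structure are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical mixture-of-expert controller gate: a text-instruction embedding
# is softly routed across task-specific expert controllers, and their outputs
# are blended by the gate weights. Dimensions and expert count are made up.
import torch
import torch.nn as nn

class MoEController(nn.Module):
    def __init__(self, dim: int = 768, n_experts: int = 3):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
            for _ in range(n_experts)
        )

    def forward(self, instruction_emb: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(instruction_emb), dim=-1)  # (B, E)
        outputs = torch.stack([e(instruction_emb) for e in self.experts], dim=1)
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)  # weighted blend

cond = MoEController()(torch.randn(4, 768))  # conditioning signal for diffusion
```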

    Subject-Diffusion: Open-Domain Personalized Text-to-Image Generation without Test-time Fine-tuning

    Recent progress in personalized image generation using diffusion models has been significant. However, development in the area of open-domain, non-fine-tuning personalized image generation is proceeding rather slowly. In this paper, we propose Subject-Diffusion, a novel open-domain personalized image generation model that, in addition to not requiring test-time fine-tuning, requires only a single reference image to support personalized generation of single or multiple subjects in any domain. Firstly, we construct an automatic data labeling tool and use the LAION-Aesthetics dataset to build a large-scale dataset consisting of 76M images with their corresponding subject detection bounding boxes, segmentation masks, and text descriptions. Secondly, we design a new unified framework that combines text and image semantics by incorporating coarse location and fine-grained reference image control to maximize subject fidelity and generalization. Furthermore, we adopt an attention control mechanism to support multi-subject generation. Extensive qualitative and quantitative results demonstrate that our method outperforms other SOTA frameworks in single-, multiple-, and human-customized image generation. Please refer to our project page: https://oppo-mente-lab.github.io/subject_diffusion/
    Comment: 14 pages, 10 figures
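    The unified framework described above combines text semantics with coarse location and fine-grained reference-image control. The sketch below shows one plausible way such conditioning could be assembled, by appending projected image and bounding-box tokens to the text token sequence consumed by a diffusion model's cross-attention; the encoders, dimensions, and location encoding are assumptions, not Subject-Diffusion's actual design.

```python
# Illustrative fusion of text and reference-image semantics into a single
# conditioning sequence; encoders, dimensions, and the coarse-location
# encoding are assumptions made for this sketch.
import torch
import torch.nn as nn

class SubjectConditioner(nn.Module):
    def __init__(self, dim: int = 768):
        super().__init__()
        self.img_proj = nn.Linear(dim, dim)   # project subject-image features
        self.box_proj = nn.Linear(4, dim)     # coarse location: (x1, y1, x2, y2)

    def forward(self, text_tokens, image_feat, box):
        # Append projected image and box tokens to the text token sequence,
        # so cross-attention in the diffusion U-Net can attend to all three.
        img_tok = self.img_proj(image_feat).unsqueeze(1)
        box_tok = self.box_proj(box).unsqueeze(1)
        return torch.cat([text_tokens, img_tok, box_tok], dim=1)

cond = SubjectConditioner()(
    torch.randn(2, 77, 768),  # text embeddings (e.g., from a CLIP text encoder)
    torch.randn(2, 768),      # reference-image embedding
    torch.rand(2, 4),         # normalized subject bounding box
)
```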

    Expediting Building Footprint Segmentation from High-resolution Remote Sensing Images via Progressive Lenient Supervision

    The efficacy of building footprint segmentation from remotely sensed images has been hindered by limited model transfer effectiveness. Many existing building segmentation methods were developed upon the encoder-decoder architecture of U-Net, in which the encoder is fine-tuned from newly developed backbone networks pre-trained on ImageNet. However, the heavy computational burden of existing decoder designs hampers the successful transfer of these modern encoder networks to remote sensing tasks. Even the widely adopted deep supervision strategy fails to mitigate these challenges, because its loss is invalid in hybrid regions where foreground and background pixels are intermixed. In this paper, we conduct a comprehensive evaluation of existing decoder network designs for building footprint segmentation and propose an efficient framework, denoted BFSeg, to enhance learning efficiency and effectiveness. Specifically, we propose a densely connected coarse-to-fine feature fusion decoder network that facilitates easy and fast feature fusion across scales. Moreover, considering the invalidity of hybrid regions in the down-sampled ground truth during the deep supervision process, we present a lenient deep supervision and distillation strategy that enables the network to learn proper knowledge from deep supervision. Building upon these advancements, we have developed a new family of building segmentation networks that consistently surpass prior works in both performance and efficiency across a wide range of newly developed encoder networks. The code will be released at https://github.com/HaonanGuo/BFSeg-Efficient-Building-Footprint-Segmentation-Framework.
    Comment: 13 pages, 8 figures. Submitted to IEEE Transactions on Neural Networks and Learning Systems
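    To make the "lenient" idea concrete: when ground truth is down-sampled for a deep side output, cells that mix foreground and background have no clean label. The sketch below masks such hybrid cells out of a side-output loss; the purity threshold and average-pooling rule are illustrative assumptions rather than BFSeg's exact strategy.

```python
# A minimal sketch of lenient deep supervision: when the ground truth is
# down-sampled for a deep side output, cells that mix foreground and
# background are masked out of the loss instead of being forced to an
# arbitrary label. Thresholds and pooling choice are assumptions.
import torch
import torch.nn.functional as F

def lenient_deep_supervision_loss(side_logits, gt, purity_thresh=0.9):
    # gt: (B, 1, H, W) binary mask; side_logits: lower-resolution side output.
    scale = gt.shape[-1] // side_logits.shape[-1]
    frac_fg = F.avg_pool2d(gt.float(), kernel_size=scale)  # foreground fraction
    target = (frac_fg > 0.5).float()
    # Keep only "pure" cells: mostly foreground or mostly background.
    valid = (frac_fg > purity_thresh) | (frac_fg < 1 - purity_thresh)
    loss = F.binary_cross_entropy_with_logits(side_logits, target, reduction="none")
    return (loss * valid.float()).sum() / valid.float().sum().clamp(min=1)

loss = lenient_deep_supervision_loss(torch.randn(2, 1, 64, 64),
                                     torch.randint(0, 2, (2, 1, 256, 256)))
```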

    DeepCL: Deep Change Feature Learning on Remote Sensing Images in the Metric Space

    Change detection (CD) is an important yet challenging task in the Earth observation field for monitoring Earth surface dynamics. The advent of deep learning techniques has recently propelled automatic CD into a technological revolution. Nevertheless, deep learning-based CD methods are still plagued by two primary issues: 1) insufficient temporal relationship modeling and 2) pseudo-change misclassification. To address these issues, we complement the strong temporal modeling ability of metric learning with the prominent fitting ability of segmentation, and propose a deep change feature learning (DeepCL) framework for robust and explainable CD. First, we design a hard sample-aware contrastive loss that reweights the importance of hard and simple samples. This loss allows explicit modeling of the temporal correlation between bi-temporal remote sensing images. Furthermore, the modeled temporal relations are utilized as a knowledge prior to guide the segmentation process for detecting change regions. The DeepCL framework is thoroughly evaluated both theoretically and experimentally, demonstrating its superior feature discriminability, resilience against pseudo changes, and adaptability to a variety of CD algorithms. Extensive comparative experiments substantiate the quantitative and qualitative superiority of DeepCL over state-of-the-art CD approaches.
    Comment: 12 pages, 7 figures. Submitted to IEEE Transactions on Image Processing
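    The hard sample-aware contrastive loss described above can be pictured as a standard bi-temporal contrastive loss whose per-pixel terms are re-weighted by difficulty. The sketch below implements one such scheme; the focal-style weighting and margin are assumptions, not DeepCL's published formulation.

```python
# Illustrative hard sample-aware contrastive loss over bi-temporal feature
# maps: unchanged pixels are pulled together, changed pixels pushed beyond a
# margin, and each pixel's term is re-weighted by how hard it currently is.
import torch

def hard_aware_contrastive_loss(feat_t1, feat_t2, change_mask,
                                margin=2.0, gamma=2.0):
    # feat_*: (B, C, H, W); change_mask: (B, 1, H, W) with 1 = changed.
    dist = torch.norm(feat_t1 - feat_t2, dim=1, keepdim=True)  # (B, 1, H, W)
    pos = dist ** 2                                # unchanged: want dist -> 0
    neg = torch.clamp(margin - dist, min=0) ** 2   # changed: want dist > margin
    per_pixel = torch.where(change_mask.bool(), neg, pos)
    # Hardness weight: pixels with a larger residual loss count more
    # (focal-style); detached so the weight itself is not optimized.
    weight = (per_pixel / (per_pixel.max() + 1e-8)) ** gamma
    return (weight.detach() * per_pixel).mean()

loss = hard_aware_contrastive_loss(torch.randn(2, 32, 64, 64),
                                   torch.randn(2, 32, 64, 64),
                                   torch.randint(0, 2, (2, 1, 64, 64)))
```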

    An adaptive method for inertia force identification in cantilever under moving mass

    The present study is concerned with an adaptive method based on the wavelet transform to identify the inertia force between a moving mass and a cantilever. The basic model of the cantilever is described and a classical identification method is introduced, from which approximate equations for the cantilever model can be obtained. However, the modal order adopted in such identification methods is usually fixed, which may make the identification results unsatisfactory. In forward calculation methods, the frequency of the highest mode is usually higher than the frequency of the input force. Therefore, the wavelet transform is applied to decompose the deflection data, and the proportion of the low-frequency component is used as the parameter of a binary decision function that selects the modal order. The calculation results show that the adaptive method adopted in this paper effectively improves the accuracy of the identified inertia force between the moving mass and the cantilever, and the relationship between the low-frequency proportion and the modal order is also characterized.
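    The adaptive step described above, choosing the modal order from the low-frequency proportion of the wavelet-decomposed deflection, might look like the following sketch; the wavelet, decomposition level, threshold, and candidate modal orders are illustrative assumptions.

```python
# Sketch of the wavelet-based modal-order selection: decompose the measured
# deflection, compute the energy proportion of the low-frequency
# (approximation) component, and apply a binary threshold rule.
import numpy as np
import pywt

def choose_modal_order(deflection, wavelet="db4", level=4,
                       low_freq_thresh=0.95, low_order=3, high_order=6):
    # coeffs[0] is the low-frequency (approximation) component; the rest
    # are detail (high-frequency) bands.
    coeffs = pywt.wavedec(deflection, wavelet, level=level)
    total_energy = sum(np.sum(c ** 2) for c in coeffs)
    low_prop = np.sum(coeffs[0] ** 2) / total_energy
    # Binary decision: a smooth (low-frequency-dominated) response needs
    # fewer modes; a rougher response needs more.
    return low_order if low_prop >= low_freq_thresh else high_order

t = np.linspace(0.0, 1.0, 1024)
deflection = np.sin(2 * np.pi * 3 * t) + 0.05 * np.random.randn(t.size)
print(choose_modal_order(deflection))
```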